- Google is giving users bad, AI-generated answers. Again.
- In February, when this happened before, Google shelved the faulty AI product behind the results.
- But this time feels different — Google has basically committed to this idea as the future of the company.
Step 1: Google rolls out a new, AI-powered search product.
Step 2: Users quickly find the product's flaws and point them out in social media posts, which become news stories.
Step 3: Google admits that its new, AI-powered search product is fundamentally flawed, and puts it on ice.
Yup, we've seen this drill before. Back in February, Google was shamed into shelving an image-generating feature for its AI chatbot.
Now we are two steps into the same process: Google is widely rolling out its "AI Overview" feature, which replaces its usual answer to search queries — a list of links to sites where you might find the actual answer you want — with an AI-generated answer that tries to summarize the content on those sites. And people are finding examples of Google generating answers that are wrong, and sometimes comically bad.
Which is why my colleague Katie Notopoulos constructed, and then ate, a pizza made with glue. (Bless you, Katie! This is truly heroic stuff, and I hope you spend your hazard pay wisely. We do get hazard pay, right?)
So here's the $2 trillion question: Is Google going to have to backtrack on this one, too?
No, says Google, which argues that the dumb answers it has been generating are few and far between. And that most people don't know or care about search answers that tell people how many rocks to eat. Or that you should stare into the sun for 5 to 15 minutes — unless you have darker skin, in which case you can go for twice as long.
And Google also notes that it is quickly swatting down Bad Answers as they crop up. Particularly ones where someone smart enough to use a phone but stupid enough to follow those answers could harm themselves.
Here's the formal version of that answer, via Google comms person Lara Levin:
"The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web. Many of the examples we've seen have been uncommon queries, and we've also seen examples that were doctored or that we couldn't reproduce. We conducted extensive testing before launching this new experience, and as with other features we've launched in Search, we appreciate the feedback. We're taking swift action where appropriate under our content policies, and using these examples to develop broader improvements to our systems, some of which have already started to roll out."
OK.
But as I've said, we've seen a version of this story before. What happens if people keep finding Bad Answers on Google, and Google can't whack-a-mole them fast enough? And, crucially: what if regular people, people who don't spend time reading or talking about tech news, start to hear about Google's Bad And Potentially Dangerous Answers?
Because that would be a really, really big problem. Google does a lot of different things, but the reason it's worth more than $2 trillion is still because of its two core products — search, and the ads that it generates alongside search results. And if people — normal people — lose confidence in Google as a search/answer machine …
Well, that would be a real problem.
Privately, Googlers are doubling down on the notion that these Bad Answers really are fringe problems. Unlike the "woke Google" problem from a few months ago, when there really was a flaw in the model Google was using to create images, they say there's no underlying defect here. Google never gets things 100% correct (they say even more quietly) because, in the end, it's still just relying on what people publish on the internet. It's just that some people are paying a lot more attention right now, because there's a new thing to pay attention to.
I'm willing to believe that answer: I've been seeing Google's AI answers in my search results for about a month, and they're generally fine.
And the big difference between the old Google results and the new ones is the responsibility and authority Google is shouldering. In the past, Google was telling you somebody else could answer your question. Now Google is answering your question.
It's the difference between me handing you a map and me giving you directions that will send your car barreling over a cliff.
You could argue, as my 15-year-old son does (we are weird people, so we talk about this stuff at home), that Google shouldn't be replacing its perfectly fine olde-timey search results with AI-generated answers. If people wanted AI-generated answers, they'd go to ChatGPT, right?
But of course, people going to ChatGPT is what Google is worried about. Which is why it's making this major pivot — to disrupt itself before ChatGPT or other AI engines do.
You can argue that it's moving too fast, or too sloppily, or whatever. But it's hard to imagine Google walking this one back now.